Adjustable Autonomy


Human-AI Teaming Co-Learning in Military Operations

Maathuis, Clara, Cools, Kasper

arXiv.org Artificial Intelligence

In a time of rapidly evolving military threats and increasingly complex operational environments, the integration of AI into military operations offers significant advantages. At the same time, it raises various challenges and risks around building and deploying human-AI teaming systems in an effective and ethical manner. These are currently often tackled from an external perspective that treats the human-AI teaming system as a collective agent. Zooming into the dynamics inside the system, however, allows a broader palette of multidimensional responsibility, safety, and robustness aspects to be addressed. To this end, this research proposes the design of a trustworthy co-learning model for human-AI teaming in military operations that encompasses a continuous and bidirectional exchange of insights between the human and AI agents as they jointly adapt to evolving battlefield conditions. It does so by integrating four dimensions. First, adjustable autonomy, which dynamically calibrates the autonomy levels of agents depending on aspects like mission state, system confidence, and environmental uncertainty. Second, multi-layered control, which provides continuous oversight, monitoring of activities, and accountability. Third, bidirectional feedback, with explicit and implicit feedback loops through which the agents communicate their reasoning, uncertainties, and learned adaptations. And fourth, collaborative decision-making, in which decisions are generated, evaluated, and proposed together with confidence levels and the rationale behind them. The proposed model is accompanied by concrete examples and recommendations that contribute to further developing responsible and trustworthy human-AI teaming systems in military operations.
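As a rough illustration of the first dimension, the sketch below shows how calibrating an autonomy level from mission state, system confidence, and environmental uncertainty might look. The names, levels, and thresholds here are our own illustrative assumptions, not the paper's model.

```python
# Illustrative sketch (not the paper's model): calibrating an agent's
# autonomy level from mission state, system confidence, and environmental
# uncertainty. Names, levels, and thresholds are assumptions.
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_CONTROL = 0         # human decides, AI only advises
    SHARED_CONTROL = 1        # AI proposes, human approves
    SUPERVISED_AUTONOMY = 2   # AI acts, human monitors and can veto
    FULL_AUTONOMY = 3         # AI acts independently

def calibrate_autonomy(confidence: float, uncertainty: float,
                       mission_critical: bool) -> AutonomyLevel:
    """Map confidence and uncertainty (both in [0, 1]) to an autonomy level."""
    if mission_critical or confidence < 0.5 or uncertainty > 0.8:
        return AutonomyLevel.HUMAN_CONTROL
    if confidence < 0.7 or uncertainty > 0.5:
        return AutonomyLevel.SHARED_CONTROL
    if confidence < 0.9 or uncertainty > 0.2:
        return AutonomyLevel.SUPERVISED_AUTONOMY
    return AutonomyLevel.FULL_AUTONOMY

# Example: high confidence but a critical mission phase keeps the human in control.
print(calibrate_autonomy(confidence=0.95, uncertainty=0.1, mission_critical=True))
```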


Agents with Adjustable Autonomy

AI Magazine

This symposium was motivated by the recognition that even as autonomous system technologies mature into practical applications, humans still refuse to disappear. Humans stay in the loop, so practical applications require that the autonomous software be understandable and adjustable. Adjustable autonomy means dynamically adjusting the level of autonomy of an agent depending on the situation. For real-world teaming between humans and autonomous agents, the desired or optimal level of control can vary over time. Hence, effective autonomous agents will support adjustable autonomy, which contrasts with most work in autonomous systems, where the style of interaction between the human and the agent is fixed by design. The adjustable autonomy concept includes the ability for humans to adjust the autonomy of agents, for agents to adjust their own autonomy, and for a group of agents to adjust the autonomy relationships within the group.
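A toy sketch of the three kinds of adjustment the abstract distinguishes (human-initiated, agent-initiated, and group-level); the classes, levels, and rules below are illustrative assumptions, not from the symposium.

```python
# Illustrative toy example (not from the symposium): the three kinds of
# autonomy adjustment distinguished in the abstract.

class Agent:
    def __init__(self, name: str, autonomy: int = 1):
        self.name = name
        self.autonomy = autonomy  # assumed scale: 0 = manual .. 3 = fully autonomous

    def set_autonomy(self, level: int, source: str) -> None:
        print(f"{self.name}: autonomy {self.autonomy} -> {level} (set by {source})")
        self.autonomy = level

    def self_adjust(self, situation_risk: float) -> None:
        # Agent-initiated adjustment: lower its own autonomy when risk is high.
        if situation_risk > 0.7 and self.autonomy > 0:
            self.set_autonomy(self.autonomy - 1, source="self")

def rebalance_team(agents: list[Agent], budget: int) -> None:
    # Group-level adjustment: cap total autonomy across the team.
    while sum(a.autonomy for a in agents) > budget:
        most = max(agents, key=lambda a: a.autonomy)
        most.set_autonomy(most.autonomy - 1, source="team")

# Usage: the human, the agent itself, and the group can each adjust autonomy.
a, b = Agent("scout", 3), Agent("planner", 2)
a.set_autonomy(2, source="human")   # human adjusts the agent
b.self_adjust(situation_risk=0.9)   # agent adjusts itself
rebalance_team([a, b], budget=2)    # group adjusts relationships within it
```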


Building Strong Semi-Autonomous Systems

Zilberstein, Shlomo (University of Massachusetts)

AAAI Conferences

The vision of populating the world with autonomous systems that reduce human labor and improve safety is gradually becoming a reality. Autonomous systems have changed the way space exploration is conducted and are beginning to transform everyday life with a range of household products. In many areas, however, there are considerable barriers to the deployment of fully autonomous systems. We refer to systems that require some degree of human intervention in order to complete a task as semi-autonomous systems. We examine the broad rationale for semi-autonomy and define basic properties of such systems. Accounting for the human in the loop presents a considerable challenge for current planning techniques. We examine various design choices in the development of semi-autonomous systems and their implications on planning and execution. Finally, we discuss fruitful research directions for advancing the science of semi-autonomy.


Integrating the Human Recommendations in the Decision Process of Autonomous Agents: A Goal Biased Markov Decision Process

Cote, Nicolas (GREYC - CNRS (UMR 6072), Université de Caen) | Bouzid, Maroua (GREYC - CNRS (UMR 6072), Université de Caen) | Mouaddib, Abdel-Illah (GREYC - CNRS (UMR 6072), Université de Caen)

AAAI Conferences

In this paper, we address the problem of computing the policy of an autonomous agent while taking human recommendations into account, which is appropriate for mixed initiative or adjustable autonomy. For this purpose, we present the Goal Biased Markov Decision Process (GBMDP), which assumes two kinds of recommendations: the human either recommends that the agent avoid certain situations (represented by undesirable states) or recommends favorable situations (represented by desirable states). The agent takes these recommendations into account by updating its policy, updating only the states concerned by the recommendations rather than the whole policy. We show that GBMDP is efficient and reduces the time and attention the human must devote to the agent. Moreover, GBMDP optimizes the robot's computation time by updating only the necessary states. We also show how GBMDP can handle more than one recommendation. Finally, our experiments show how we can update policies that are intractable for standard approaches.
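A minimal sketch of the local-update idea, under our own simplifying assumptions (the function names, data layout, and convergence scheme are ours, not the paper's algorithm): after a recommendation changes the reward of a few states, Bellman backups are redone only over those states and their immediate predecessors, with values outside this region held fixed.

```python
# Illustrative sketch (ours, not the paper's GBMDP algorithm): local policy
# update after a human recommendation changes the reward of a few states.

def local_policy_update(states, actions, P, R, V, changed, gamma=0.95,
                        tol=1e-6):
    """P[s][a]: list of (next_state, prob); R: reward dict; V: value dict.

    `changed` is the set of states whose rewards a recommendation altered.
    Only those states and their immediate predecessors are re-solved;
    values outside this region are treated as fixed (an approximation).
    """
    region = set(changed)
    region |= {s for s in states
               if any(ns in changed for a in actions for ns, _ in P[s][a])}
    while True:  # value iteration restricted to the affected region
        delta = 0.0
        for s in region:
            best = max(R[s] + gamma * sum(pr * V[ns] for ns, pr in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Greedy policy recomputed only over the updated region.
    policy = {s: max(actions,
                     key=lambda a: sum(pr * V[ns] for ns, pr in P[s][a]))
              for s in region}
    return V, policy
```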


A Testbed for Investigating Task Allocation Strategies between Air Traffic Controllers and Automated Agents

Schurr, Nathan (Aptima, Inc.) | Good, Richard (Aptima, Inc.) | Alexander, Amy (Aptima, Inc.) | Picciano, Paul (Aptima, Inc.) | Ganberg, Gabriel (Aptima, Inc.) | Therrien, Michael (Aptima, Inc.) | Beard, Bettina L. (NASA Ames Research Center) | Holbrook, Jon (San Jose State University Research Foundation)

AAAI Conferences

To meet the growing demands of the National Airspace System (NAS) stakeholders and provide the level of service, safety, and security needed to sustain future air transport, the Next Generation Air Transportation System (NextGen) concept calls for technologies and systems offering increasing support from automated systems that provide decision-aiding and optimization capabilities. This is an exciting application for some core aspects of Artificial Intelligence research, since the automation must be designed to enable human operators to access and process a myriad of information sources, understand heightened system complexity, and maximize capacity, throughput, and fuel savings in the NAS. This paper introduces an emerging application of mixed-initiative (adjustable autonomy), multi-agent systems, and task scheduling techniques to the air traffic control domain. Consequently, we have created a testbed for investigating the critical challenges in supporting the early design of systems that allow for optimal, context-sensitive function (role) allocation between air traffic controllers and automated agents. A pilot study has been conducted with the testbed, and preliminary results show a marked qualitative improvement when using dynamic function allocation optimization versus static function allocation.
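A hedged sketch of dynamic versus static function allocation (our own toy formulation, not the testbed's implementation): tasks are reassigned between the controller and the automation as costs and capacity change, so re-running the allocator models the dynamic case.

```python
# Illustrative sketch (ours, not the testbed's) of context-sensitive task
# allocation between an air traffic controller and an automated agent.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    human_cost: float   # assumed: expected workload for the controller
    agent_cost: float   # assumed: expected error risk for the automation

def allocate(tasks: list[Task], human_capacity: float) -> dict[str, str]:
    """Greedy allocation under a human workload budget.

    Tasks where the human is relatively stronger (human_cost - agent_cost
    most negative) go to the controller first, until capacity runs out.
    """
    assignment, load = {}, 0.0
    for t in sorted(tasks, key=lambda t: t.human_cost - t.agent_cost):
        if (t.human_cost - t.agent_cost < 0
                and load + t.human_cost <= human_capacity):
            assignment[t.name] = "controller"
            load += t.human_cost
        else:
            assignment[t.name] = "agent"
    return assignment

# Re-running allocate() as costs change models dynamic (vs. static) allocation.
print(allocate([Task("conflict_detection", 2.0, 0.5),
                Task("handoff_approval", 1.0, 3.0)], human_capacity=4.0))
```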


Electric Elves: What Went Wrong and Why

Tambe, Milind (University of Southern California)

AI Magazine

Software personal assistants continue to be a topic of significant research interest. This article outlines some of the important lessons learned from a successfully deployed team of personal assistant agents (Electric Elves) in an office environment. In the Electric Elves project, a team of almost a dozen personal assistant agents was continually active for seven months. Each elf (agent) represented one person and assisted in daily activities in an actual office environment. This project led to several important observations about privacy, adjustable autonomy, and social norms in office environments. In addition to outlining some of the key lessons learned, we outline our continued research to address some of the concerns raised.


Towards Adjustable Autonomy for the Real World

Scerri, P., Pynadath, D. V., Tambe, M.

Journal of Artificial Intelligence Research

Adjustable autonomy refers to entities dynamically varying their own autonomy, transferring decision-making control to other entities (typically agents transferring control to human users) in key situations. Determining whether and when such transfers-of-control should occur is arguably the fundamental research problem in adjustable autonomy. Previous work has investigated various approaches to addressing this problem but has often focused on individual agent-human interactions. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous approaches. First, these approaches use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects on actions) to an agent's team due to such transfers-of-control. To remedy these problems, this article presents a novel approach to adjustable autonomy, based on the notion of a transfer-of-control strategy. A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from an agent to a user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. We present a mathematical model of transfer-of-control strategies. The model guides and informs the operationalization of the strategies using Markov Decision Processes, which select an optimal strategy, given an uncertain environment and costs to the individuals and teams. The approach has been carefully evaluated, including via its use in a real-world, deployed multi-agent system that assists a research group in its daily activities.
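A simplified sketch of evaluating transfer-of-control strategies (our own discrete-time approximation, not the article's exact model; names and parameters are assumptions): each segment of a strategy gives control to an entity for a bounded time, responses arrive stochastically, and waiting accrues a team miscoordination cost, so strategies that hand control back and forth can be compared to one-shot transfers.

```python
# Illustrative sketch (our simplification of the article's model): expected
# utility of a transfer-of-control strategy given as (entity, steps) segments.

def expected_utility(strategy, quality, respond_p, wait_cost=0.1):
    """strategy: list of (entity, steps), e.g. [("agent", 2), ("human", 5)].

    quality[e]: utility of a decision made by entity e.
    respond_p[e]: per-step probability that entity e responds while in control.
    wait_cost: team miscoordination cost accrued per elapsed time step.
    """
    eu, p_undecided, t = 0.0, 1.0, 0
    for entity, steps in strategy:
        for _ in range(steps):
            t += 1
            p_now = p_undecided * respond_p[entity]   # decision made at time t
            eu += p_now * (quality[entity] - wait_cost * t)
            p_undecided -= p_now
    return eu  # any mass left in p_undecided means no decision was ever made

# Compare a one-shot transfer to the human against a strategy that takes
# control back after a deadline (the agent here responds immediately).
q = {"human": 1.0, "agent": 0.6}
p = {"human": 0.3, "agent": 1.0}
print(expected_utility([("human", 10)], q, p))            # rigid one-shot transfer
print(expected_utility([("human", 3), ("agent", 1)], q, p))  # transfer with fallback
```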